@make(letterhead,Phone"497-4430",Who "John McCarthy", Logo old, Department CSD)
@style(indent 8)
@begin(address)
Professor Steven Wagner
Department of Philosophy
University of Illinois
105 Gregory Hall
810 South Wright Street
Urbana, Illinois 61801
@end(address)
@greeting(Dear Professor Wagner:)
@begin(body)
	Thanks for the copies of your "Chinese Room ..." and
"A Philosopher Looks at AI".  I have some comments on the latter,
and I enclose two items that you might find relevant.

	1. I agree that AI may require greater effort than many
people suppose.  In my view it is no more an engineering problem
today than nuclear energy was before fission was discovered.
Conceptual advances must be made first.  Would that philosophers
would help make them.

	2. My own opinion is that the communication problem is
secondary to the representation problem, which in turn is secondary
to formulating what it is that people know about the common sense
world - or alternatively, what general common sense facts and
what facts about particular situations must be known in order to
behave intelligently.  Perhaps the opposite opinion is dominant
in AI, but more likely you think so because (as your references
suggest) you have been consorting with the people who emphasize
natural language.

	My preliminary paper proposing a Common Business Communication
Language illustrates my opinion that even routine business communication
requires more of the semantics of natural language than has yet been
embodied in computer programs, and that the semantic problems can be separated
from the syntactic framework in which the computational linguists
have embedded them.  Incidentally, I have offered the following
challenge to Schankians and others who claim to have programs that
understand natural language.  Write a program that will accept from
me a question, e.g., about the population of a city, and determine
the answer by having it telephone and interact with the Lawrence
Berkeley Laboratory database of the 1970 census.  This database is
supposed to interact in English with its users.  The general idea is
that a program that understands English should be able to get
information from other programs that also purport to understand
English.

	3. I agree with your three observations that start at
the bottom of page 3.
@newpage

	4. I have doubts about the six conditions starting on
page 7, especially in connection with the business communication
problem and with the problem of communicating with databases.
Conditions 1, 2, 3, and 6 are immediate in these cases, and 5
is a part of 4.  In many situations, the first three can be
assured without progress in AI, and 4 contains the difficulties.

	5. I think you are taking for granted an AI commitment to
frames.  I have always had doubts about them, because their use
assumes that there is always a single frame that should dominate
a person's or program's consideration of a situation and that
other frames enter subordinately.  This minimizes the possibility
that different pieces of information have equal weight.  Indeed
the frame paradigm seems to lead to over-specialized programs,
and even Schank is now moving toward "chunks" of information
that have more mutual independence.

	6. Who "observed" rather than claimed that "common sense
knowledge is far too extensive to be stored as a list of propositions"?
One cannot support such a claim unless one knows the possibilities for
storing knowledge in general form.  For example, suppose I ask you,
"Do you know whether it is possible to drive from Indianapolis to
St. Louis in less than 8.7 hours?"  You may answer that you know that
it is possible even if you have never previously considered driving
between these two specific cities or for this specific time.  Exactly
what information is stored is presently unknown.  Even more puzzling,
suppose I ask you whether President Reagan, at noon on a particular
day, is standing, sitting, or lying down.  You answer that you don't know
and resist my suggestion that you think harder.  How do you know
thinking harder won't help?

[Well, the first bit is rather like your enchiladas and garnets example.
That one is well handled by supposing that what is stored in the
brain is independent information on the sizes of enchiladas and garnets,
and that reasoning is used to get the answer needed.]
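To make the driving example concrete, here is a minimal sketch (mine,
not from either paper; the mileage and speed figures are round numbers
assumed purely for illustration) of how one general fact and one rule
can replace an enumerated list of stored propositions:
@begin(verbatim)
# Illustrative sketch only; the distance and speed are assumed
# round figures, not data from the letter or the papers.
ROAD_MILES = {("Indianapolis", "St. Louis"): 240}
TYPICAL_MPH = 55  # assumed highway speed

def can_drive_within(a, b, hours):
    """Derive the answer by reasoning from general facts rather
    than looking up a stored proposition for this city pair."""
    miles = ROAD_MILES.get((a, b)) or ROAD_MILES.get((b, a))
    if miles is None:
        return None  # analogous to answering "I don't know"
    return miles / TYPICAL_MPH <= hours

print(can_drive_within("Indianapolis", "St. Louis", 8.7))  # True
@end(verbatim)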

	While it is conceivable that the common sense information
stored in the brain is as vast as you speculate, I think it is
unlikely, and it cannot readily be estimated.  What can be estimated
is the incremental amount of information required for an intelligent
human to become expert in some field, say a branch of mathematics.
This is small, especially when we notice that the texts are rather
redundant.

	7. Your point about psychological knowledge is well taken and
leads to some interesting questions for AI: namely, what can be
determined by using the ability to step into another's shoes, and what
requires general psychological knowledge and reasoning?  Note that we don't put ourselves
into a "full psychological state" but simulate the other person's mind
much faster than in real time.
Putting oneself in another's shoes is
quite schematic, and it seems to me that one is relying more on his
ability to guess how he himself would react to a hypothetical state
of affairs than on his "primary" reactive faculties.
When we work backwards and ask what could have put a person into
an observed state of mind, we are using more than our ability to
simulate.  Other questions require general knowledge even when
they are about ourselves.

	8. The comparison between knowledge in databases and books
is apt, and it is agreed that books don't think.  Let me point out,
however, that no one has given a theoretical treatment of the
sense in which knowledge is "in" books, and therefore it may be
premature to jump to conclusions about what aspects of knowing
apply to database programs.

	9. You may find the ideas I propose about ascribing mental
qualities to machines to be a form of liberal behaviorism, although
I don't like to think of it that way.

	10. Re p. 25: we have to account both for what goes on in the
brain and for its relation to the external world.  The arguments about
whether meaning is primarily defined in terms of one or the other don't
seem to have been very fruitful in philosophy - let alone in AI.

	11. With regard to the summary on p. 28, it isn't clear what
rate of progress you are talking about.  I have been in AI since
1949, and the progress has been slower than I would have hoped,
but it took 100 years from Mendel to the unraveling of the
genetic code.  But perhaps we agree that substantial progress in
representation is required before we can do much more with
communication.

	I hope you find some of the points in this opinionated
letter to be of interest.
@end(body)
Sincerely,




John McCarthy
Professor of Computer Science